import os
import itertools
import cmdstanpy
import arviz as az
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from src.prepared_data import load_prepared_data

Recreating a full whole body ODE model using Stan
A report created using bibat version 0.1.1
Last time we fit a small whole body model using Stan and cmdstanpy. This time we will try to fit a bigger one: specifically the model from Pratt, Wattis, and Salter (2015), for which Laura kindly gave me data.
Setup
The first code block imports the Python packages that we will use for this analysis. The bigger model requires a little more data wrangling, which I handled in advance using bibat. In this document I want to focus on the Stan bits, so we will simply call the pre-made data preparation function. Note that the code here should be run from the root of the pratt directory in order for the relative import at the bottom of this cell to work.
This code sets a global variable called SEED. We will pass this to any code that does random number generation in order to ensure that it generates the same numbers every time.
SEED = 123

This cell uses a pre-prepared function to load all the data we will need in a nice format.
prepared_data = load_prepared_data(os.path.join("data", "prepared", "pratt"))
species = prepared_data.species
parameters = prepared_data.parameters
compartments = prepared_data.compartments
S_cpt = prepared_data.compartment_stoichiometry.set_index("Unnamed: 0").drop("Unnamed: 0.1", axis=1)
S = prepared_data.stoichiometry.set_index("species").drop("Unnamed: 0", axis=1).drop("Unnamed: 48", axis=1)

Stan code
The next cell loads a new Stan model. It’s almost exactly the same as the model from last time! The only differences are that:

- the functions block contains only one line, #include ode_pratt.stan. This is because the bigger model requires some quite involved functions; in order to not have to think about too many things at once, I put these functions in their own file.
- the first argument to the ode_bdf_tol function is get_dxdt_pratt. This is one of the new functions from ode_pratt.stan.
model = cmdstanpy.CmdStanModel(stan_file=os.path.join("src", "stan", "pratt.stan"))
print(model.code())

functions {
#include ode_pratt.stan
}
data {
int<lower=1> N_species;
int<lower=1> N_reaction;
int<lower=1> N_compartment;
int<lower=1> N_parameter;
int<lower=1> N_fixed_parameter;
int<lower=1> N_timepoint;
int<lower=0,upper=N_timepoint*N_species> N_measurement;
// network information
matrix[N_species,N_reaction] S_rxn;
matrix[N_species,N_compartment] S_cpt;
// known quantities
vector[N_compartment] compartment_volume;
vector[N_species] initial_concentration;
array[N_timepoint] real timepoint;
vector[N_fixed_parameter] fixed_parameter_value;
array[N_fixed_parameter] int<lower=1,upper=N_parameter> fixed_parameter_ix;
array[N_parameter-N_fixed_parameter] int<lower=1,upper=N_parameter> free_parameter_ix;
// measurement information
array[N_measurement] int<lower=1,upper=N_timepoint> y_timepoint_ix;
array[N_measurement] int<lower=1,upper=N_species> y_species_ix;
vector[N_measurement] y;
vector[N_measurement] y_sd;
// priors
array[2] vector[N_parameter-N_fixed_parameter] prior_p;
// run configuration
int<lower=0,upper=1> likelihood;
real rel_tol;
real abs_tol;
int<lower=0> max_num_steps;
}
parameters {
vector<lower=0>[N_parameter-N_fixed_parameter] p_free;
}
transformed parameters {
vector[N_parameter] p;
p[free_parameter_ix] = p_free;
p[fixed_parameter_ix] = fixed_parameter_value;
vector[N_reaction] flux_init = get_flux_pratt(0, initial_concentration, p);
array[N_timepoint] vector[N_species] conc =
ode_bdf_tol(get_dxdt_pratt,
initial_concentration,
0,
timepoint,
rel_tol,
abs_tol,
max_num_steps,
p,
S_rxn,
S_cpt,
compartment_volume);
}
model {
// prior model
p_free ~ lognormal(prior_p[1], prior_p[2]);
// measurement model
if (likelihood){
for (m in 1:N_measurement){
y[m] ~ lognormal(log(conc[y_timepoint_ix[m]][y_species_ix[m]]), y_sd[m]);
}
}
}
generated quantities {
vector[N_measurement] yrep;
for (m in 1:N_measurement){
yrep[m] = lognormal_rng(log(conc[y_timepoint_ix[m]][y_species_ix[m]]), y_sd[m]);
}
}
So most of the interesting new material is in the file ode_pratt.stan. The next code cell prints the contents of this file so that we can check it out.
The function mm_one_substrate is the same as last time. The function get_dxdt_pratt is exactly the same as the function get_dxdt_simple from last time, but with a different function for calculating the flux. The shorter new functions are pretty easy to read.
So the only really new thing is the long function get_flux_pratt. This function stores some numbers as variables, then calculates the flux for each reaction according to the rate laws from the source paper Pratt, Wattis, and Salter (2015).
Note how much easier the parameter and species indexing is to follow and debug for the reactions whose fluxes are abstracted into functions than for the reactions whose fluxes are calculated in place!
ode_file = os.path.join("src", "stan", "ode_pratt.stan")
with open(ode_file, "r") as f:
ode_code = f.read()
print(ode_code)

/* Functions for calculating ODEs in the whole body model. */
real mm_one_substrate(real vmax, real km, real sub){
return vmax * sub / (km + sub);
}
real mass_action_one_substrate(real k, real x){
return k * x;
}
real mass_action_two_substrates(real k, real x1, real x2){
return k * x1 * x2;
}
real inhibition(real ki, real x){
return 1 / (1 + ki * x);
}
real get_diet(real t, vector breaks, real k, real d){
real out = 0;
for (b in breaks){
if (t > b){
out += k * (t - b) * exp(-(t - b) / d);
}
}
return out;
}
vector get_flux_pratt(real t, vector x, vector p){
real diet_tg = get_diet(t, [0, 300, 600]', 0.00185859360751066, 180);
real diet_glucose = get_diet(t, [0, 300, 600]', 0.139081199343437, 180);
real m_amp = p[55] / (p[56] * p[57] * x[18] + p[58] * p[59] * x[16] * x[1]);
return [
p[3] + p[4] *tanh((x[2]- p[5])/(0.8*p[6])), // vP_insulin_in
mass_action_one_substrate(p[7], x[1]), // vP_insulin_out
diet_tg, // vP_chylo_in
mass_action_one_substrate(p[8], x[3]), // vP_TG_to_L_FFA
p[9] * x[11] / (p[10] + x[11]), // vL_TG_stor_to_P_TG
mass_action_one_substrate(p[11], x[12]), // vL_TG_secr_to_P_TG
(1 + p[12] * x[1]) * p[13] * x[2], // vP_glucose_to_M_glucose
mass_action_two_substrates(p[14], x[3], p[1]), // vP_TG_to_M_FFA
p[15] * (1 + p[16] * x[1]) * x[2], // vP_glucose_to_A_glucose; p[16] i.e. kga is hardcoded to zero??
p[17], // vP_glucose_out
mass_action_one_substrate(p[18], x[4]), // vP_NEFA_to_M_FFA
mass_action_one_substrate(p[19], x[5]), // vP_chylo_to_M_FFA
mass_action_one_substrate(p[20], x[4]), // vP_NEFA_to_A_FFA
mass_action_two_substrates(p[21], x[3], p[2]), // vP_TG_to_A_FFA
mass_action_one_substrate(p[22], x[5]), // vP_chylo_to_L_FFA
mass_action_one_substrate(p[23], x[2]), // vP_glucose_to_L_glucose
mass_action_one_substrate(p[24], x[4]), // vP_NEFA_to_L_FFA
(1 - p[25]) * p[26] * (1 + p[27] * x[1]) * x[5] * p[2], //vP_chylo_to_A_FFA
p[25] * p[26] * (1 + p[27] * x[1]) * x[5] * p[2], // vP_chylo_to_P_NEFA
diet_glucose, // vL_glucose_in
p[28] * x[6] / (p[29] + x[6]) + p[30] * x[6] / (p[31] + x[6]) * (1 / (1 + p[32] * x[8])), // vL_glucose_to_L_G6P
p[33] * x[1] * x[8] * 0.5 * (1 + tanh((p[34] - x[7])/ p[35])), // vL_G6P_to_L_glycogen
mass_action_two_substrates(p[36], x[8], x[1]), // vL_G6P_to_L_pyruvate
p[37], // vL_pyruvate_in
mass_action_two_substrates(p[38] * p[39], x[9], x[1]), // vL_pyruvate_to_L_FFA
mm_one_substrate(p[40], p[41], x[12]), // vL_FFA_to_L_TG_secr
mm_one_substrate(p[42], p[43], x[12]), // vL_FFA_to_L_TG_stor
mass_action_one_substrate(p[44], x[6]), // vL_glucose_to_P_glucose
p[45] / (1 + p[46] * x[1]) * x[7] / (x[7] + p[47]), // vL_glycogen_to_L_G6P
p[48] * x[12] / (1 + p[49] * x[1]), // vL_FFA_out
mm_one_substrate(p[50], p[51], x[11]), // vL_TG_stor_to_L_FFA
mass_action_one_substrate(p[52], x[8]), // vL_G6P_to_L_glucose
p[53] * x[9] / (1 + p[54] * x[1]), // vL_pyruvate_to_L_G6P
p[60], // vM_TG_to_M_FFA
mass_action_two_substrates(p[61], x[1], x[18]), // vM_FFA_to_M_TG
mm_one_substrate(p[62], p[63], x[13]) * inhibition(p[32], x[15]), // vM_glucose_to_M_G6P
mm_one_substrate(p[64], p[65], x[14]) * inhibition(p[66], x[1]), // vM_glycogen_to_M_G6P
p[67] * x[1] * x[15] * 0.5 * (1 + tanh((p[68] - x[14]) / p[69])), // vM_G6P_to_M_glycogen
mass_action_two_substrates(p[70], x[15], x[1]), // vM_G6P_to_M_pyruvate
(1 + p[12] * x[1]) * p[71] * x[13], // vM_glucose_to_P_glucose
p[58] * x[16] * x[1] * m_amp, // vM_pyruvate_out
mass_action_two_substrates(p[56], x[18], m_amp),// vM_FFA_out
mass_action_one_substrate(p[72], x[16]), // vM_pyruvate_to_L_pyruvate
p[73] * x[1] * x[19] * x[22], // vA_FFA_and_A_glucose_to_A_TG
mass_action_one_substrate(p[74], x[20]), // vA_glycerol_to_L_G6P
p[75] / (1 + p[76] * x[1]^2), // vA_TG_to_P_NEFA_and_A_glycerol
p[15] * (1 + p[16] * x[1]) * x[19] // vA_glucose_to_P_glucose
]';
}
vector get_dxdt_pratt(real t, vector x, vector p, matrix S, matrix S_cpt, vector vol){
vector[cols(S)] flux = get_flux_pratt(t, x, p);
vector[rows(x)] a = (S * flux);
vector[rows(x)] b = (S_cpt * vol);
return a ./ b;
}
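The forcing function get_diet is simple enough to spot-check with a direct Python port before trusting the Stan version. The sketch below uses the diet_glucose constants from get_flux_pratt; the port itself is mine, not part of the original codebase:

```python
import numpy as np

def get_diet(t, breaks, k, d):
    """Python port of the Stan get_diet function, for spot-checking."""
    out = 0.0
    for b in breaks:
        if t > b:
            out += k * (t - b) * np.exp(-(t - b) / d)
    return out

# Zero before the first meal break, then a sum of gamma-like pulses afterwards.
print(get_diet(0.0, [0, 300, 600], 0.139081199343437, 180))  # 0.0
print(get_diet(180.0, [0, 300, 600], 0.139081199343437, 180))
```

A useful property to check is that with a single break at zero the output at time t is exactly k * t * exp(-t / d).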
Data preparation
This can be done very similarly to last time.
The first step is to write a bit of Python to define all the things we need.
species["compartment_volume"] = (
compartments
.set_index("name")
["value_pratt"]
.reindex(species["compartment"])
.values
)
species["initial_concentration"] = (
species["initial_abundance"] / species["compartment_volume"]
)
species_code = dict(zip(species["name"], range(1, len(species["name"]) + 1)))
parameters["prior_mu"] = np.log(parameters["value_pratt"])
parameters["prior_sigma"] = 0.15
parameters["is_fixed"] = parameters["value_pratt"] == 0
fixed_parameters = parameters.loc[lambda df: df["is_fixed"]]
free_parameters = parameters.loc[lambda df: ~df["is_fixed"]]
timepoint = np.linspace(0.01, 50, 5)
y = pd.DataFrame(
itertools.product(species["name"], range(len(timepoint))),
columns=["species", "timepoint_ix"]
).assign(sd=0.1)

/Users/tedgro/.pyenv/versions/3.11.1/lib/python3.11/site-packages/pandas/core/arraylike.py:402: RuntimeWarning: divide by zero encountered in log
result = getattr(ufunc, method)(*inputs, **kwargs)
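The divide-by-zero warning above comes from taking np.log of the fixed parameters, whose value_pratt is zero. The resulting -inf prior_mu rows are never sent to Stan, so the warning is harmless, but it can be avoided by only taking logs where the value is positive. A minimal sketch with a toy table:

```python
import numpy as np
import pandas as pd

# Toy parameter table mirroring the convention above: fixed parameters have value 0.
params = pd.DataFrame({"name": ["a", "b", "c"], "value_pratt": [1.5, 0.0, 2.0]})
# .where leaves NaN where the condition fails, so np.log emits no warning and
# fixed rows get NaN instead of -inf.
params["prior_mu"] = np.log(params["value_pratt"].where(params["value_pratt"] > 0))
print(params)
```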
Now, just like last time, the next cell creates a dictionary whose keys match the variables in the data block of the Stan model and whose values cmdstanpy can ingest.
data_shared = {
"N_species": len(species),
"N_reaction": S.shape[1],
"N_compartment": len(compartments),
"N_parameter": len(parameters),
"N_fixed_parameter": len(fixed_parameters),
"N_timepoint": len(timepoint),
"N_measurement": len(y),
"S_rxn": S.values,
"S_cpt": S_cpt.values,
"compartment_volume": compartments["value_pratt"].values,
"initial_concentration": species["initial_concentration"].values,
"timepoint": timepoint,
"fixed_parameter_value": fixed_parameters["value_pratt"].values,
"fixed_parameter_ix": fixed_parameters.index.values + 1,
"free_parameter_ix": free_parameters.index.values + 1,
"y_timepoint_ix": y["timepoint_ix"].values + 1,
"y_species_ix": y["species"].apply(species_code.get).values,
"y_sd": y["sd"].values,
"prior_p": [
free_parameters["prior_mu"].values,
free_parameters["prior_sigma"].values
],
"rel_tol": 1e-12,
"abs_tol": 1e-12,
"max_num_steps": int(1e6)
}
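Index bugs are a common way to get nonsense out of a Stan model, so before sampling it can be worth checking that the 1-based fixed and free parameter indices really partition 1..N_parameter. A small hypothetical helper (not part of the original code):

```python
import numpy as np

def check_partition(fixed_ix, free_ix, n):
    """Check that two 1-based index arrays partition 1..n with no overlap or gaps."""
    all_ix = np.concatenate([np.asarray(fixed_ix), np.asarray(free_ix)])
    assert all_ix.min() >= 1 and all_ix.max() <= n, "index out of bounds"
    assert len(np.unique(all_ix)) == n, "indices overlap or leave gaps"

check_partition([2, 5], [1, 3, 4], 5)  # passes silently
```

Run against the real inputs, this would be check_partition(data_shared["fixed_parameter_ix"], data_shared["free_parameter_ix"], data_shared["N_parameter"]).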
data_gen = {**data_shared, **{"y": np.ones(len(y)), "likelihood": 0}}

coords = {
"timepoint": timepoint,
"species": species["name"].values,
"parameter_name": parameters["name"].values,
"measurement_ix": y.index
}
dims={
"p": ["parameter_name"],
"conc": ["timepoint", "species"],
"yrep": ["measurement_ix"]
}
init = {"p": parameters["value_pratt"].values}
mcmc = model.sample(
data=data_gen,
inits=init,
fixed_param=True,
iter_sampling=1,
chains=1,
show_console=True
)
idata_gen = az.from_cmdstanpy(
mcmc,
posterior_predictive={"y": "yrep"},
coords=coords,
dims=dims
)

15:06:28 - cmdstanpy - INFO - Chain [1] start processing
Chain [1] method = sample (Default)
Chain [1] sample
Chain [1] num_samples = 1
Chain [1] num_warmup = 1000 (Default)
Chain [1] save_warmup = 0 (Default)
Chain [1] thin = 1 (Default)
Chain [1] adapt
Chain [1] engaged = 1 (Default)
Chain [1] gamma = 0.050000000000000003 (Default)
Chain [1] delta = 0.80000000000000004 (Default)
Chain [1] kappa = 0.75 (Default)
Chain [1] t0 = 10 (Default)
Chain [1] init_buffer = 75 (Default)
Chain [1] term_buffer = 50 (Default)
Chain [1] window = 25 (Default)
Chain [1] algorithm = fixed_param
Chain [1] num_chains = 1 (Default)
Chain [1] id = 1 (Default)
Chain [1] data
Chain [1] file = /var/folders/ql/z_6fb5792v1_8tscf4hg5byc0000gp/T/tmp34bq0nox/jlruuubx.json
Chain [1] init = /var/folders/ql/z_6fb5792v1_8tscf4hg5byc0000gp/T/tmp34bq0nox/bzu_3kd7.json
Chain [1] random
Chain [1] seed = 1533
Chain [1] output
Chain [1] file = /var/folders/ql/z_6fb5792v1_8tscf4hg5byc0000gp/T/tmp34bq0nox/pratt6g7xzsit/pratt-20230417150628.csv
Chain [1] diagnostic_file = (Default)
Chain [1] refresh = 100 (Default)
Chain [1] sig_figs = -1 (Default)
Chain [1] profile_file = profile.csv (Default)
Chain [1] num_threads = 1 (Default)
Chain [1]
Chain [1] Iteration: 1 / 1 [100%] (Sampling)
15:06:45 - cmdstanpy - INFO - Chain [1] done processing
15:06:45 - cmdstanpy - WARNING - Non-fatal error during sampling:
Exception: lognormal_rng: Location parameter is nan, but must be finite! (in '/Users/tedgro/git/teddygroves/pratt/src/stan/pratt.stan', line 69, column 4 to column 84)
Consider re-running with show_console=True if the above output is unclear!
Chain [1] Exception: lognormal_rng: Location parameter is nan, but must be finite! (in '/Users/tedgro/git/teddygroves/pratt/src/stan/pratt.stan', line 69, column 4 to column 84)
Chain [1]
Chain [1] Elapsed Time: 0 seconds (Warm-up)
Chain [1] 0.227 seconds (Sampling)
Chain [1] 0.227 seconds (Total)
Chain [1]
Chain [1]
Uh oh, a warning: the location parameter passed to lognormal_rng is nan, which means the ODE solution contains non-finite (or non-positive) values. This is worrying because the ODE tolerance hyperparameters are already pretty strict. Time for some debugging…
Debugging
The number one rule for debugging Stan is to work in a high level language like Python as much as possible.
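For example, the core arithmetic of get_dxdt_pratt, dx/dt = (S · flux) / (S_cpt · vol), is easy to reproduce with scipy before trusting the Stan version. The sketch below uses a made-up one-reaction network, not the real Pratt matrices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy network: one mass-action reaction converting species 1 into species 2,
# with the two species living in compartments of different volumes.
S = np.array([[-1.0], [1.0]])   # species x reaction stoichiometry
S_cpt = np.eye(2)               # species x compartment incidence
vol = np.array([1.0, 2.0])      # compartment volumes

def dxdt(t, x):
    flux = np.array([0.5 * x[0]])          # mass action flux
    return (S @ flux) / (S_cpt @ vol)      # same structure as get_dxdt_pratt

sol = solve_ivp(dxdt, (0.0, 10.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
# Total amount (concentration times volume) should be conserved.
print(sol.y[:, -1] @ vol)  # ~1.0
```

Checks like this, mass conservation and non-negativity of the trajectory, are cheap to run in Python and catch many stoichiometry mistakes before the Stan model is ever compiled.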
One way to do this is by writing trivial Stan programs that execute single functions or small bits of code. I wrote one of these to test the flux function get_flux_pratt and another one to test the ode step. Here is the flux program:
model_flux = cmdstanpy.CmdStanModel(
stan_file=os.path.join("src", "stan", "test_flux_pratt.stan")
)
print(model_flux.code())

functions {
#include ode_pratt.stan
}
data {
int<lower=1> N_species;
int<lower=1> N_reaction;
int<lower=1> N_parameter;
vector<lower=0>[N_species] x;
vector<lower=0>[N_parameter] p;
}
generated quantities {
vector[N_reaction] flux = get_flux_pratt(0, x, p);
}
Note that there is no parameters or model block: for test programs like this it is better to put everything in generated quantities where possible, since code in the parameters, transformed parameters, and model blocks is evaluated with gradients and therefore tends to be more computationally costly.
Since the new program’s data variables all have the same names as in our main model, the input dictionary below can be assembled straightforwardly from quantities we have already defined.
data_flux = {
"N_species": len(species),
"N_reaction": S.shape[1],
"N_parameter": len(parameters),
"x": species["initial_concentration"].values,
"p": parameters["value_pratt"].values
}
mcmc_flux = model_flux.sample(data=data_flux, fixed_param=True, iter_sampling=1, seed=SEED)
idata_flux = az.from_cmdstanpy(
mcmc_flux,
coords={"reaction": S.columns, "parameter": parameters["name"], "species": species["name"]},
dims={"x": ["species"], "p": ["parameter"], "flux": ["reaction"]}
)
idata_flux.posterior["flux"].sel(chain=0, draw=0).to_series()

15:06:45 - cmdstanpy - INFO - CmdStan start processing
15:06:45 - cmdstanpy - INFO - CmdStan done processing.
reaction
vP_insulin_in 8.413080
vP_insulin_out 86.500000
vP_chylo_in 0.000000
vP_TG_to_L_FFA 0.000941
vL_TG_stor_to_P_TG 0.009950
vL_TG_secr_to_P_TG 0.149364
vP_glucose_to_M_glucose 0.316444
vP_TG_to_M_FFA 0.007166
vP_glucose_to_A_glucose 1.281500
vP_glucose_out 0.630000
vP_NEFA_to_M_FFA 0.156375
vP_chylo_to_M_FFA 0.004805
vP_NEFA_to_A_FFA 0.048227
vP_TG_to_A_FFA 0.008770
vP_chylo_to_L_FFA 0.000803
vP_glucose_to_L_glucose 1.623230
vP_NEFA_to_L_FFA 0.107940
vP_chylo_to_A_FFA 0.013263
vP_chylo_to_P_NEFA 0.004421
vL_glucose_in 0.000000
vL_glucose_to_L_G6P 4.450380
vL_G6P_to_L_glycogen 0.050577
vL_G6P_to_L_pyruvate 0.072415
vL_pyruvate_in 0.133000
vL_pyruvate_to_L_FFA 0.000001
vL_FFA_to_L_TG_secr 0.006100
vL_FFA_to_L_TG_stor 0.063581
vL_glucose_to_P_glucose 2.439660
vL_glycogen_to_L_G6P 0.009092
vL_FFA_out 0.028690
vL_TG_stor_to_L_FFA 0.066853
vL_G6P_to_L_glucose 5.266520
vL_pyruvate_to_L_G6P 0.124980
vM_TG_to_M_FFA 0.747500
vM_FFA_to_M_TG 6.649560
vM_glucose_to_M_G6P 0.152526
vM_glycogen_to_M_G6P 3.601610
vM_G6P_to_M_glycogen 159.365000
vM_G6P_to_M_pyruvate 5.831700
vM_glucose_to_P_glucose 0.003398
vM_pyruvate_out 0.192862
vM_FFA_out 0.265363
vM_pyruvate_to_L_pyruvate 0.546274
vA_FFA_and_A_glucose_to_A_TG 0.411700
vA_glycerol_to_L_G6P 0.099923
vA_TG_to_P_NEFA_and_A_glycerol 0.093023
vA_glucose_to_P_glucose 1.245890
Name: flux, dtype: float64
Some of the fluxes are very high compared to the others - interesting!
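A quick way to make "very high" precise is to compare the fluxes on a log scale: anything far from the median is worth a second look. A sketch with made-up numbers (the real version would use the flux series printed above):

```python
import numpy as np
import pandas as pd

# Made-up fluxes spanning several orders of magnitude.
flux = pd.Series({"vA": 0.01, "vB": 0.1, "vC": 160.0, "vD": 0.05})
log_flux = np.log10(flux)
# Flag fluxes more than two orders of magnitude from the median.
outliers = flux[(log_flux - log_flux.median()).abs() > 2]
print(outliers)  # vC stands out
```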
Next we can do the same process for the ODE. Here is the test model:
model_ode = cmdstanpy.CmdStanModel(
stan_file=os.path.join("src", "stan", "test_ode_pratt.stan")
)
print(model_ode.code())

functions {
#include ode_pratt.stan
}
data {
int<lower=1> N_species;
int<lower=1> N_parameter;
int<lower=1> N_reaction;
int<lower=1> N_compartment;
int<lower=1> N_timepoint;
matrix[N_species,N_reaction] S_rxn;
matrix[N_species,N_compartment] S_cpt;
vector[N_compartment] compartment_volume;
vector[N_species] initial_concentration;
array[N_timepoint] real timepoint;
real rel_tol;
real abs_tol;
int<lower=0> max_num_steps;
vector<lower=0>[N_parameter] p;
}
generated quantities {
array[N_timepoint] vector[N_species] conc =
ode_bdf_tol(get_dxdt_pratt,
initial_concentration,
0,
timepoint,
rel_tol,
abs_tol,
max_num_steps,
p,
S_rxn,
S_cpt,
compartment_volume);
}
Here is the output:
data_ode = {**data_shared, **{"p": parameters["value_pratt"].values}}
mcmc_ode = model_ode.sample(
data=data_ode, fixed_param=True, iter_sampling=1, seed=SEED
)
idata_ode = az.from_cmdstanpy(
mcmc_ode,
coords=coords,
dims=dims
)
idata_ode.posterior["conc"].sel(chain=0, draw=0).to_series().unstack("timepoint")

15:06:45 - cmdstanpy - INFO - CmdStan start processing
15:06:45 - cmdstanpy - INFO - CmdStan done processing.
| species \ timepoint | 0.0100 | 12.5075 | 25.0050 | 37.5025 | 50.0000 |
|---|---|---|---|---|---|
| P_insulin | 49.970400 | 140.092000 | 372.023000 | 4.742790e+02 | 5.193630e+02 |
| P_glucose | 4.271550 | 20.780200 | 934.755000 | 5.813410e+04 | 4.561010e+06 |
| P_TG | 1.059980 | 1.219420 | 1.426050 | 1.636230e+00 | 1.828430e+00 |
| P_NEFA | 0.691910 | 0.672486 | 0.545178 | 4.255280e-01 | 3.384530e-01 |
| P_chylo | 0.106995 | 0.098151 | 0.100561 | 1.112360e-01 | 1.280620e-01 |
| L_glucose | 2.710720 | 5.038900 | 143.872000 | 8.566260e+03 | 6.464120e+05 |
| L_glycogen | 12.933200 | 13.037300 | 14.249400 | 1.839290e+01 | 2.518360e+01 |
| L_G6P | 1.314730 | 1.368460 | 3.330340 | 6.304380e+00 | 7.696560e+00 |
| L_pyruvate | 0.041161 | 0.115260 | -6.892920 | -4.980540e+01 | -7.707730e+01 |
| L_TG_secr | 0.014967 | -0.485405 | -1.138790 | -1.811440e+00 | -2.436440e+00 |
| L_TG_stor | 22.022200 | 22.021700 | 22.049900 | 2.208120e+01 | 2.210470e+01 |
| L_FFA | 0.373677 | 0.630301 | 0.696781 | 6.754970e-01 | 6.084820e-01 |
| M_glucose | 0.039273 | 0.497257 | 627.060000 | 1.713760e+05 | 2.641970e+07 |
| M_glycogen | 21.226200 | 25.083700 | 130.803000 | 1.445760e+02 | 1.492360e+02 |
| M_G6P | 0.004652 | 0.001652 | 0.138503 | 1.244370e-01 | 1.168720e-01 |
| M_pyruvate | 1.094860 | 0.017394 | 39.832000 | 6.967730e+01 | 7.323130e+01 |
| M_TG | 10.449800 | 17.222700 | 18.266700 | 1.869090e+01 | 1.905470e+01 |
| M_FFA | 17.743800 | 1.020240 | 0.291562 | 2.255830e-01 | 2.046040e-01 |
| A_glucose | 4.174150 | 335.878000 | 17523.200000 | 9.225420e+05 | 5.171460e+07 |
| A_glycerol | 0.475757 | 0.412840 | 0.146298 | 7.357310e-02 | 5.584140e-02 |
| A_TG | 519.749000 | 521.207000 | 521.321000 | 5.217070e+02 | 5.221900e+02 |
| A_FFA | 6.363950 | 0.002761 | 0.000022 | 3.544570e-07 | 6.333620e-09 |
Two species—L_TG_secr and L_pyruvate—are negative, which shouldn’t ever happen. Moreover, they are not close to zero, indicating that the problem is probably a coding error (we specified the equations incorrectly) rather than an ODE solver failure.
The species A_glucose, M_glucose and P_glucose reach very high concentrations, which also looks unrealistic.
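One way to start tracking down the negative species is to decompose dx/dt for a suspect species into per-reaction contributions S[i, j] * flux[j] and see which term drives it negative. A sketch with made-up numbers (the real version would use the row of S for, say, L_pyruvate together with the fluxes computed earlier):

```python
import pandas as pd

# Made-up stoichiometry row and fluxes for one species.
S_row = pd.Series({"r_in": 1.0, "r_out": -1.0, "r_side": -1.0})
flux = pd.Series({"r_in": 0.1, "r_out": 0.05, "r_side": 0.2})
# Signed contribution of each reaction to this species' rate of change.
contrib = (S_row * flux).sort_values()
print(contrib)        # most negative contribution first
print(contrib.sum())  # net rate of change (times volume) for this species
```

If one reaction's negative contribution dwarfs everything else near the initial condition, its rate law and parameter indexing are the first things to re-check against the source paper.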